Meta engineers estimate that roughly 1 to 2 million additional Nvidia H100 GPUs will be deployed worldwide for AI inference next year. If all of them were devoted to language-model generation, they could produce on the order of 100,000 tokens per person per day for everyone on Earth, and the energy cost would be manageable: roughly two additional nuclear power plants would be enough to power global AI inference. Model training, by contrast, is increasingly constrained by data scarcity, and researchers are working to make models more efficient. Within the next 3-4 years, we should learn whether current techniques can achieve general AI.
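A back-of-envelope calculation shows why these figures hang together. The sketch below uses assumed round numbers that are not from the source: a world population of 8 billion, an H100 board power of about 700 W, and roughly 1 GW of output per nuclear plant.

```python
# Back-of-envelope check of the inference estimate above.
# Assumptions (not from the source): world population, H100 board
# power, and per-plant output are rough illustrative values.

POPULATION = 8e9                 # assumed world population
TOKENS_PER_PERSON_PER_DAY = 1e5  # figure from the text
SECONDS_PER_DAY = 86_400
NUM_GPUS = 2e6                   # upper end of the 1-2 million range
H100_POWER_W = 700.0             # assumed H100 SXM board power (TDP)
PLANT_OUTPUT_GW = 1.0            # assumed output of one nuclear plant

total_tokens_per_s = POPULATION * TOKENS_PER_PERSON_PER_DAY / SECONDS_PER_DAY
per_gpu_tokens_per_s = total_tokens_per_s / NUM_GPUS
fleet_power_gw = NUM_GPUS * H100_POWER_W / 1e9
plants_needed = fleet_power_gw / PLANT_OUTPUT_GW

print(f"required throughput: {per_gpu_tokens_per_s:,.0f} tokens/s per GPU")
print(f"fleet power draw:    {fleet_power_gw:.2f} GW (~{plants_needed:.1f} plants)")
```

Under these assumptions, each GPU would need to sustain a few thousand tokens per second (plausible for batched inference), and the fleet would draw about 1.4 GW at the GPUs alone; adding cooling and datacenter overhead pushes the total toward the "two nuclear plants" figure.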